Stable soft extrapolation of entire functions
Soft extrapolation refers to the problem of recovering a function from its
samples, multiplied by a fast-decaying window and perturbed by additive
noise, over an interval which is potentially larger than the essential support
of the window. A core theoretical question is to provide bounds on the possible
amount of extrapolation, depending on the sample perturbation level and the
function prior. In this paper we consider soft extrapolation of entire
functions of finite order and type (containing the class of bandlimited
functions as a special case), multiplied by a super-exponentially decaying
window (such as a Gaussian). We consider a weighted least-squares polynomial
approximation with a judiciously chosen number of terms and a number of samples
which scales linearly with the degree of approximation. It is shown that this
simple procedure provides stable recovery with an extrapolation factor which
scales logarithmically with the perturbation level and is inversely
proportional to the characteristic lengthscale of the function. The pointwise
extrapolation error exhibits a Hölder-type continuity with an exponent
derived from weighted potential theory, which changes from 1 near the available
samples, to 0 when the extrapolation distance reaches the characteristic
smoothness length scale of the function. The algorithm is asymptotically
minimax, in the sense that there is essentially no better algorithm yielding
meaningfully lower error over the same smoothness class. When viewed in the
dual domain, the above problem corresponds to (stable) simultaneous
de-convolution and super-resolution for objects of small space/time extent. Our
results then show that the amount of achievable super-resolution is inversely
proportional to the object size, and therefore can be significant for small
objects.
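The weighted least-squares procedure described above can be sketched numerically. The following is a minimal illustration with choices that are ours, not the paper's: a Gaussian window, a Chebyshev polynomial basis, and toy values for the degree, sample count, and noise level.

```python
import numpy as np

# Illustrative sketch (not the paper's exact procedure): recover an entire
# function from Gaussian-windowed, noisy samples on [-1, 1] by weighted
# least squares, then evaluate the fitted polynomial slightly outside.
rng = np.random.default_rng(0)

f = lambda x: np.cos(3 * x)                 # a simple entire function
sigma = 0.5                                 # window scale (assumed)
window = lambda x: np.exp(-x**2 / (2 * sigma**2))

n_samples, degree, noise = 200, 12, 1e-8    # samples scale linearly with degree
x = np.linspace(-1.0, 1.0, n_samples)
y = window(x) * f(x) + noise * rng.standard_normal(n_samples)

# Model y ~ window(x) * p(x): multiplying each Vandermonde row by the
# window weights the fit toward the region where the SNR is highest.
V = np.polynomial.chebyshev.chebvander(x, degree)
coef, *_ = np.linalg.lstsq(V * window(x)[:, None], y, rcond=None)

p = lambda t: np.polynomial.chebyshev.chebval(t, coef)
# Extrapolate modestly beyond the essential support of the window.
t = 1.3
err = abs(p(t) - f(t))
```

With the small perturbation level chosen here, the recovery stays accurate some distance past the sampling interval; increasing `noise` shrinks the usable extrapolation range, consistent with the logarithmic scaling stated above.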
Deep vs. shallow networks: An approximation theory perspective
The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.
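A toy illustration of the kind of compositional sparsity at issue (our example, not the paper's): a function of eight variables assembled as a binary tree of bivariate constituents. A deep, tree-structured network only ever has to approximate two-variable functions, while a shallow network must approximate the full eight-variable map directly.

```python
import numpy as np

# A generic bivariate constituent function (assumed for illustration).
g = lambda a, b: np.tanh(a + 2 * b)

def f_compositional(x):
    """Compositional function of 8 variables: a binary tree of 2-input
    nodes, 8 inputs -> 4 -> 2 -> 1, using 7 = d - 1 bivariate pieces."""
    level1 = [g(x[0], x[1]), g(x[2], x[3]), g(x[4], x[5]), g(x[6], x[7])]
    level2 = [g(level1[0], level1[1]), g(level1[2], level1[3])]
    return g(level2[0], level2[1])

x = np.linspace(0.0, 0.7, 8)
y = f_compositional(x)
```

Although `f_compositional` is nominally 8-dimensional, every node in the tree is 2-dimensional, which is the low "relative dimension" that a hierarchical architecture can exploit.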
Applications of classical approximation theory to periodic basis function networks and computational harmonic analysis
In this paper, we describe a novel approach to classical approximation theory of periodic univariate and multivariate functions by trigonometric polynomials.
While classical wisdom holds that such approximation is too sensitive to the lack of smoothness of the target functions at isolated points, our constructions show how to overcome this problem. We describe applications to approximation by periodic basis function networks, and indicate further research in the direction of Jacobi expansions and approximation on the Euclidean sphere. While the paper is mainly intended to be a survey of our recent research in these directions, several results are proved for the first time here.
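The general idea can be sketched as follows; this is our illustration of a smoothly filtered trigonometric sum, not the paper's construction. For a periodic target with an isolated kink, a smooth low-pass filter on the Fourier coefficients damps the oscillations that a sharply truncated partial sum produces near the singular point, while retaining accuracy where the function is smooth.

```python
import numpy as np

# Approximate |sin x|, periodic but non-smooth at x = 0 and x = pi, by
# trigonometric polynomials: sharp truncation vs. a smooth exponential filter.
N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.abs(np.sin(x))

fhat = np.fft.fft(f)
k = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
cutoff = N // 4

# Sharply truncated partial sum vs. smoothly filtered sum.
p_raw = np.real(np.fft.ifft(fhat * (np.abs(k) <= cutoff)))
p_filt = np.real(np.fft.ifft(fhat * np.exp(-(np.abs(k) / cutoff) ** 4)))

err_raw = np.max(np.abs(p_raw - f))
err_filt = np.max(np.abs(p_filt - f))
```

At points where the target is smooth (e.g. x = pi/2), the filtered approximation is far more accurate than the global error bound suggests; its error is governed by the local smoothness of the target, which is the localization phenomenon the constructions above exploit.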